learning single index model
Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models
We focus on the task of learning a single index model $\sigma(w^\star \cdot x)$ with respect to the isotropic Gaussian distribution in $d$ dimensions. Prior work has shown that the sample complexity of learning $w^\star$ is governed by the information exponent $k^\star$ of the link function $\sigma$, which is defined as the index of the first nonzero Hermite coefficient of $\sigma$. Ben Arous et al. (2021) showed that $n \gtrsim d^{k^\star-1}$ samples suffice for learning $w^\star$ and that this is tight for online SGD. However, the CSQ lower bound for gradient-based methods only shows that $n \gtrsim d^{k^\star/2}$ samples are necessary. In this work, we close the gap between the upper and lower bounds by showing that online SGD on a smoothed loss learns $w^\star$ with $n \gtrsim d^{k^\star/2}$ samples. We also draw connections to statistical analyses of tensor PCA and to the implicit regularization effects of minibatch SGD on empirical losses.
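To make the definition of the information exponent concrete, here is a minimal sketch (not code from the paper) that numerically estimates $k^\star$ for a given link function $\sigma$ by computing its normalized Hermite coefficients with Gauss–Hermite quadrature; the tolerance, quadrature degree, and example links are illustrative choices.

```python
# Sketch: estimate the information exponent k* of a link function sigma,
# i.e. the index of its first nonzero (normalized) Hermite coefficient
# c_k = E_{z~N(0,1)}[sigma(z) He_k(z)] / sqrt(k!).
import numpy as np
from numpy.polynomial import hermite_e as H   # probabilists' Hermite He_k
from math import factorial, sqrt, pi

def information_exponent(sigma, k_max=10, deg=80, tol=1e-8):
    # Gauss-HermiteE nodes/weights integrate against exp(-z^2/2);
    # dividing by sqrt(2*pi) turns sums into expectations under N(0, 1).
    z, w = H.hermegauss(deg)
    w = w / sqrt(2 * pi)
    for k in range(1, k_max + 1):
        he_k = H.hermeval(z, [0] * k + [1])                       # He_k(z)
        c_k = np.sum(w * sigma(z) * he_k) / sqrt(factorial(k))    # normalized coefficient
        if abs(c_k) > tol:
            return k
    return None  # all coefficients up to k_max are numerically zero

if __name__ == "__main__":
    print(information_exponent(lambda z: z))              # 1 (linear link)
    print(information_exponent(lambda z: z ** 2 - 1))     # 2 (mean-zero quadratic)
    print(information_exponent(lambda z: z ** 3 - 3 * z)) # 3 (He_3 itself)
```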
Learning Single Index Models with Diffusion Priors
Anqi Tang, Youming Chen, Shuchen Xue, Zhaoqiang Liu
Diffusion models (DMs) have demonstrated a remarkable ability to generate diverse and high-quality images by efficiently modeling complex data distributions. They have also been explored as powerful generative priors for signal recovery, resulting in substantial improvements in the quality of reconstructed signals. However, existing research on signal recovery with diffusion models either focuses on specific reconstruction problems or is unable to handle nonlinear measurement models with discontinuous or unknown link functions. In this work, we focus on using DMs to achieve accurate recovery from semi-parametric single index models, which encompass a variety of popular nonlinear models that may have {\em discontinuous} and {\em unknown} link functions. We propose an efficient reconstruction method that requires only one round of unconditional sampling and (partial) inversion of DMs. We establish theoretical guarantees on the effectiveness of the proposed method under appropriate conditions. We perform numerical experiments on image datasets for different nonlinear measurement models and observe that, compared to competing methods, our approach yields more accurate reconstructions while requiring significantly fewer neural function evaluations.
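For illustration only, the sketch below sets up the semi-parametric single index measurement model $y_i = f(a_i \cdot x^\star)$ with a discontinuous, unknown link (here $f = \mathrm{sign}$) and recovers the direction of $x^\star$ with a simple correlation estimator under Gaussian sensing vectors. This is not the paper's reconstruction method, which additionally uses a diffusion-model prior; it only illustrates the measurement setup.

```python
# Sketch: single index measurements y_i = f(<a_i, x*>) with a discontinuous
# link (sign), and the plain correlation estimator x_hat ∝ (1/n) Σ y_i a_i,
# which recovers the *direction* of x* when the a_i are standard Gaussian.
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 5000                              # illustrative dimensions

x_star = rng.standard_normal(d)
x_star /= np.linalg.norm(x_star)              # ground-truth unit direction

A = rng.standard_normal((n, d))               # Gaussian sensing vectors a_i
y = np.sign(A @ x_star)                       # 1-bit (discontinuous) link

x_hat = A.T @ y / n                           # correlation estimator
x_hat /= np.linalg.norm(x_hat)

print("cosine similarity:", float(x_hat @ x_star))  # approaches 1 as n grows
```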